
[recipe] refactor: refactor ray trainer for separate recipe use. (fully async / one step off)#5184

Merged
ArronHZG merged 13 commits into verl-project:main from meituan-search:refactor_fully_async_ray_trainer on Feb 6, 2026

Conversation


@ArronHZG ArronHZG commented Feb 3, 2026

What does this PR do?

  • Add a new Ray Trainer class to facilitate reusing the core logic.
  • Fix the fully async / one step off CI.
  • Currently, our parameter synchronization logic is still in a broken state.

The CI break was introduced in #4280.

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, veomni, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that can not be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

@ArronHZG ArronHZG changed the title [recipe] refactor: refactor ray trainer for separate recipe use. (fully asyn / one step off) [recipe] refactor: refactor ray trainer for separate recipe use. (fully async / one step off) Feb 4, 2026
@ArronHZG ArronHZG marked this pull request as ready for review February 5, 2026 12:29
"""
如果 algorithm.rollout_correction.bypass_mode 为 False,则计算 old_log_prob
"""
# If local_triger_step == 1, load the training engine's parameters to the CPU
Contributor:
'local_triger_step' is a typo.

trainer_future = executor.submit(self._create_trainer, config)
# Wait for both to complete
rollouter_future.result()
trainer_future.result()
Contributor:
Suggest keeping _create_rollouter and _create_trainer sequential until [PR #4792](#4792) is merged, since standalone launching mode does not guarantee bundled allocation.

Collaborator (Author):
OK, we can revise this once it is fully tested and running.
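For context, the concurrent-construction pattern under discussion can be sketched with stdlib stand-ins. The two stub functions below replace the real Ray worker-group builders (`_create_rollouter` / `_create_trainer`) and are purely illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for the real builders, which construct Ray worker groups;
# here they just return labels so the pattern is runnable on its own.
def _create_rollouter(config):
    return f"rollouter({config})"

def _create_trainer(config):
    return f"trainer({config})"

config = "cfg"
with ThreadPoolExecutor(max_workers=2) as executor:
    # Submit both constructions concurrently, as in the quoted snippet.
    rollouter_future = executor.submit(_create_rollouter, config)
    trainer_future = executor.submit(_create_trainer, config)
    # .result() blocks until each completes and re-raises any exception.
    rollouter = rollouter_future.result()
    trainer = trainer_future.result()
```

Falling back to sequential creation, as the reviewer suggests, simply means calling the two builders one after the other instead of submitting both to the executor.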

if self.local_trigger_step == 1:
    self.actor_rollout_wg.save_model_to_cpu(1)
    old_log_prob, old_log_prob_mfu = super()._compute_old_log_prob(batch)
elif self.local_trigger_step is not None:
Contributor:
Is this still necessary? The default value of local_trigger_step is now 1, and it should never be None.

Collaborator (Author):
Got it, I will change it.
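The branch under review can be exercised with a stubbed trainer. The stub classes below are hypothetical stand-ins (the real `actor_rollout_wg` moves weights between devices; here it only records calls), and the structure reflects the reviewer's point that `local_trigger_step` defaults to 1:

```python
class _StubWorkerGroup:
    """Stand-in for actor_rollout_wg; records calls instead of moving weights."""
    def __init__(self):
        self.saved = []

    def save_model_to_cpu(self, version):
        self.saved.append(version)

class _StubTrainer:
    def __init__(self, local_trigger_step=1):
        # Per the review, local_trigger_step defaults to 1 and is never None.
        self.local_trigger_step = local_trigger_step
        self.actor_rollout_wg = _StubWorkerGroup()

    def _compute_old_log_prob(self, batch):
        # Placeholder (log_probs, mfu); the real method runs a forward pass.
        return [0.0 for _ in batch], 0.5

    def step(self, batch):
        # On the first local trigger step, snapshot the training engine's
        # parameters to CPU before recomputing old_log_prob.
        if self.local_trigger_step == 1:
            self.actor_rollout_wg.save_model_to_cpu(1)
        return self._compute_old_log_prob(batch)

t = _StubTrainer()
old_log_prob, mfu = t.step([1, 2, 3])
```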

Collaborator (Author):
if bypass_recomputing_logprobs:
    ...
else:  # Recompute old_log_probs
    with marked_timer("old_log_prob", timing_raw, color="blue"):
        old_log_prob, old_log_prob_mfu = self._compute_old_log_prob(batch)

_compute_old_log_prob is only used when bypass_recomputing_logprobs is not set.
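The control flow above can be sketched in a self-contained form. The `marked_timer` below is a stdlib stand-in for verl's timing helper, and `compute_old_log_prob` is a placeholder for the real forward pass:

```python
import time
from contextlib import contextmanager

@contextmanager
def marked_timer(name, timing_raw, color=None):
    """Stdlib stand-in for verl's marked_timer: records elapsed seconds."""
    start = time.perf_counter()
    yield
    timing_raw[name] = time.perf_counter() - start

def compute_old_log_prob(batch):
    # Placeholder (log_probs, mfu); the real method runs the actor forward.
    return [0.0] * len(batch), 0.5

def maybe_recompute(batch, bypass_recomputing_logprobs, timing_raw):
    if bypass_recomputing_logprobs:
        # bypass_mode: reuse the rollout-time log-probs, skip recomputation.
        return None, None
    with marked_timer("old_log_prob", timing_raw, color="blue"):
        return compute_old_log_prob(batch)

timing = {}
lp, mfu = maybe_recompute([1, 2], bypass_recomputing_logprobs=False, timing_raw=timing)
```

When `bypass_recomputing_logprobs` is set, the timer block is never entered and the rollout-time log-probs are used as-is.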

@ArronHZG ArronHZG requested a review from wuxibin89 February 6, 2026 06:15
@ArronHZG ArronHZG merged commit 32f2d3d into verl-project:main Feb 6, 2026
100 of 126 checks passed
Tjh-UKN pushed a commit to Tjh-UKN/verl that referenced this pull request Feb 13, 2026
…ly async / one step off) (verl-project#5184)

### What does this PR do?

* Add a new Ray Trainer class to facilitate reusing the core logic.
* Fix the fully async / one step off CI.
* Currently, our parameter synchronization logic is still in a broken
state.

CI break in verl-project#4280


### Checklist Before Starting

- [x] Search for similar PRs. Paste at least one query link here: ...
- [x] Format the PR title as `[{modules}] {type}: {description}` (This
will be checked by the CI)
- `{modules}` include `fsdp`, `megatron`, `veomni`, `sglang`, `vllm`,
`rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`,
`deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`,
`model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`
- If this PR involves multiple modules, separate them with `,` like
`[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
- If this PR breaks any API (CLI arguments, config, function signature,
etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that can not be tested by CI (e.g., algorithm
implementation, new model support), validate by experiment(s) and show
results like training curve plots, evaluation results, etc.

### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s)
if possible.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the
specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review,
otherwise the reviewer might deprioritize this PR for review.

- [x] Read the [Contribute
Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [x] Apply [pre-commit
checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting):
`pre-commit install && pre-commit run --all-files --show-diff-on-failure
--color=always`
- [x] Add / Update [the
documentation](https://github.com/volcengine/verl/tree/main/docs).
- [x] Add unit or end-to-end test(s) to [the CI
workflow](https://github.com/volcengine/verl/tree/main/.github/workflows)
to cover all the code. If not feasible, explain why: ...
- [x] Once your PR is ready for CI, send a message in [the `ci-request`
channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the
`verl` Slack
workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ).
(If not accessible, please try [the Feishu group
(飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)
- [ ] If your PR is related to the `recipe` submodule, please also
update the reference to the submodule commit via `git submodule update
--remote` or `cd recipe && git pull origin main`.
@ArronHZG ArronHZG deleted the refactor_fully_async_ray_trainer branch March 3, 2026 08:22
Lidang-Jiang added a commit to Lidang-Jiang/rllm that referenced this pull request Apr 3, 2026
verl 0.7.1 refactored fully_async_policy.ray_trainer into
separation.ray_trainer (PR verl-project/verl#5184). Update imports:

- FullyAsyncRayPPOTrainer → SeparateRayPPOTrainer
- FullyAsyncAgentLoopManager → AgentLoopManager
- fully_async_policy.fully_async_main → separation.utils

Fixes rllm-org#470

Signed-off-by: Lidang-Jiang <lidangjiang@gmail.com>
JasonWei05 added a commit to rllm-org/rllm that referenced this pull request Apr 6, 2026
* fix(trainer): supplement dfed770 by adding missing update_weights in sdk trainer to fix vllm engine weight loss and Ascend PositionEmbedding OOB error

* Fix norm_adv_by_std_in_grpo read from algorithm not stepwise_advantage

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Add multi-server support to MCPEnvironment

* fix: update verl import paths for verl 0.7.1+ compatibility

verl 0.7.1 refactored fully_async_policy.ray_trainer into
separation.ray_trainer (PR verl-project/verl#5184). Update imports:

- FullyAsyncRayPPOTrainer → SeparateRayPPOTrainer
- FullyAsyncAgentLoopManager → AgentLoopManager
- fully_async_policy.fully_async_main → separation.utils

Fixes #470

Signed-off-by: Lidang-Jiang <lidangjiang@gmail.com>

* test: add import path verification tests for verl 0.7.1

Signed-off-by: Lidang-Jiang <lidangjiang@gmail.com>

* additional fixes of sdk trainer

* fix: migrate VerlBackend to new EngineWorker path (verl 0.7.1) (#483)

* fix: make VerlBackend work with new engine workers only

* fix code tool and reward import issues

* lazy import of autoprocessor

* fix: convert OmegaConf config to ActorConfig dataclass in CustomPPOLoss

The loss function wrapper was receiving a raw OmegaConf DictConfig
(with struct mode ON) from the VerlBackend, but ppo_loss expects a
Python ActorConfig dataclass with runtime-only fields like
global_batch_info. Use omega_conf_to_dataclass() to bridge the gap,
mirroring what Verl's own engine worker does at initialization.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* turn assertion into force conversion for non-disable legacy worker setting

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add hf_template tokenize_and_mask method + verl SFTTrainer compat

1. RLLMSFTDataset.__init__ now accepts processor and max_samples kwargs,
   matching verl's create_sft_dataset() call signature. Without this,
   using RLLMSFTDataset as custom_cls with verl's SFTTrainer(config)
   crashes with TypeError.

2. Add hf_template tokenization method that uses tokenizer.apply_chat_template()
   directly instead of rLLM's ChatTemplateParser. The existing cumulative/stepwise
   methods render tool calls as JSON-in-XML, which is wrong for models with native
   XML tool call format (e.g. Qwen3-Coder). The hf_template method produces the
   model's native format.

   Config: data.rllm.tokenize_and_mask_method: hf_template

* fix: handle signal.signal ValueError in non-main threads (#484)

Module-level `signal.signal(signal.SIGALRM, timeout_handler)` raises
`ValueError: signal only works in main thread` when taco.py is imported
in Ray worker threads (common during GRPO training with verl).

Wrap in try/except so the module can be safely imported from any thread.
The timeout handler is only functional in the main thread regardless.
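The fix described above amounts to wrapping the module-level handler installation, since `signal.signal` is only permitted in the main thread of the main interpreter (and `SIGALRM` is Unix-only). A minimal sketch:

```python
import signal
import threading

def timeout_handler(signum, frame):
    raise TimeoutError("evaluation timed out")

def install_alarm_handler():
    """Install the SIGALRM handler; a safe no-op outside the main thread."""
    try:
        signal.signal(signal.SIGALRM, timeout_handler)
        return True
    except ValueError:
        # signal.signal raises ValueError when called from any thread other
        # than the main thread; Ray worker threads land here.
        return False

# Importing/installing from a worker thread no longer crashes:
results = []
t = threading.Thread(target=lambda: results.append(install_alarm_handler()))
t.start()
t.join()
```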

* fix: resolve CI failures — E501 lint, tinker test deps, disable Claude actions (#486)

* fix: resolve CI failures — E501 lint errors, tinker test deps, disable Claude actions

- Fix all E501 (line > 200 chars) violations across ~45 Python files by
  wrapping long lines using standard Python continuation patterns
- Add per-file-ignores in pyproject.toml for 31 prompt/string-heavy files
  where long lines are intentional (agent prompts, system instructions)
- Add --extra dev to tinker CI workflow so pytest is available
- Disable Claude Code and Claude Code Review workflows due to credential issue

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* remove unused var

* keep fixing

* fix: run pre-commit on changed files only for PRs, show diffs on failure

- For pull requests: use --from-ref/--to-ref to only check files changed
  in the PR, matching local developer behavior
- For pushes to main: keep --all-files as a safety net
- Add --show-diff-on-fail so CI output shows exactly what needs fixing
- Add fetch-depth: 0 so git history is available for ref comparison

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* style: auto-format 21 files to fix ruff-format pre-commit failures (#487)

* style: auto-format 21 files with ruff-format to fix pre-commit failures

Apply ruff-format to pre-existing formatting issues across the codebase:
- Wrap long lines (dicts, function signatures, string literals)
- Collapse short multi-line forms that fit on one line
- Add missing trailing newline (conftest.py)
- Expand __all__ lists for readability

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: exclude notebooks from ruff-format pre-commit hook

The ruff-format hook was missing the .ipynb exclusion that the ruff lint
hook already had, causing pre-commit to fail on notebook formatting.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* Integrate fully async training to UnifiedTrainer (#481)

* init new feature on unified fully async design

* add coordinator control and refactor queue

* cherrypick Kyle's async design refinements from kyle/deepresearch

Adopts core async architecture improvements: BufferedEpisodeGroup with
EpisodeGroupAccumulator, simplified SyncCoordinator with throttle and
pause/resume, fire-and-forget generation loop, streaming gradient
accumulation, and weight sync gate mechanism on RolloutEngine.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
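The pause/resume part of the coordinator described above can be sketched with a `threading.Event`. The class name comes from the commit message; the body below is a hypothetical minimal version (the real SyncCoordinator also throttles on staleness):

```python
import threading

class SyncCoordinator:
    """Minimal pause/resume gate; generation workers wait while training
    pauses them, and proceed once training resumes them."""
    def __init__(self):
        self._resumed = threading.Event()
        self._resumed.set()  # start in the running (unpaused) state

    def pause(self):
        self._resumed.clear()

    def resume(self):
        self._resumed.set()

    def wait_if_paused(self, timeout=None):
        # Returns True if running (or resumed within timeout), else False.
        return self._resumed.wait(timeout)

coord = SyncCoordinator()
coord.pause()
paused_result = coord.wait_if_paused(timeout=0.01)   # blocked: False
coord.resume()
resumed_result = coord.wait_if_paused(timeout=0.01)  # running: True
```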

* Refactor chat parser and migrate experimental rollout to engine (#435)

* start refactoring

* revert chat template parser and override tinker parser test

* revert and fix chat parser test

* refactor tinker engine to use tinker parser

* deprecate bypass renderer mentions

* move experimental rollout out

* dump changes to rollout_engine into main file

* refactor base rollout engine class to standardize gating behaviors

* make tinker backend fully compatible

* merge Kyle's fork

* bump vllm, deepcopy msgs in Step's post_init

* [wip] make fully-async unified trainer compatible with agent flow engines

* fix staleness throttling

* enforce concurrency across engines

* fix fully async, refactor metrics

* revert engine/rollout to main, restore experimental/rollout engines

Move enhanced rollout engines (tinker, verl, completer, types) back to
rllm/experimental/rollout/ and revert rllm/engine/rollout/ to match main.
Fix import paths in experimental code and tinker backend/transform.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* revert TinkerChatTemplateParser and parser changes for separate PR

Revert parser files to main (tinker_parser.py, conftest, tests, __init__,
chat_template_parser, utils). Revert tinker_engine to main's ChatTemplateParser
approach, keeping only super().__init__() and _get_model_response rename.
Also restore pyproject.toml to main.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* revert bypass_render_with_parser and tinker parser-related changes

Revert config, docs, examples, and rollout files that referenced
bypass_render_with_parser (now staying in tinker_engine since we
reverted to main's ChatTemplateParser approach). Clean up tinker_backend
to only retain async-related changes.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* remove engine/gateway-level gate mechanism

The per-request gate on RolloutEngine is unnecessary:
- partial_rollout=True: verl handles abort/resume at server level,
  Tinker hot-swaps weights in place
- partial_rollout=False: coordination happens at task dispatch level
  (coordinator pause/resume), not per-request

Remove close_gate/open_gate/wait_for_gate/wait_for_drain from
RolloutEngine, GatewayManager, and model-gateway proxy/server/client.
Remove needs_weight_sync_gate from BackendProtocol.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: move task tracking to coordinator, revert validation rename, cleanup

- Move _in_flight_tasks tracking from UnifiedTrainer to SyncCoordinator
- Add epoch start/end hooks to async generation loop
- Remove dead _EPISODE_STRIP_KEYS constants from buffer
- Revert is_validation rename in engine/ (defer to future PR)
- Restore rllm-model-gateway/ to main

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* restore load_balancer assertion in verl_engine, revert tool_base to main

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: add future annotations to rollout_engine for TYPE_CHECKING imports

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* style: fix ruff lint and format issues on unified-fully-async branch

Auto-fixed import sorting, unused imports, and formatting across 13 files.
Manual fixes: TYPE_CHECKING import for tqdm in buffer.py, isinstance union
syntax in metrics.py, moved logger below imports in unified_trainer.py,
split long log line.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: listar2000 <35262801+listar2000@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* fix(verl): disable vllm compile cache to work around corruption bug (#490)

---------

Signed-off-by: Lidang-Jiang <lidangjiang@gmail.com>
Co-authored-by: ZhihaoSun <bitszh3271@163.com>
Co-authored-by: Zakir Jiwani <108548454+JiwaniZakir@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: taivu1998 <46636857+taivu1998@users.noreply.github.com>
Co-authored-by: Lidang-Jiang <lidangjiang@gmail.com>
Co-authored-by: Kyle Montgomery <54512765+kylemontgomery1@users.noreply.github.com>
Co-authored-by: listar2000 <35262801+listar2000@users.noreply.github.com>
Co-authored-by: yifannnwu <yifannn.wu@gmail.com>
Co-authored-by: Yifan Wu <17992118+yifannnwu@users.noreply.github.com>
Co-authored-by: Bryan Lu <55512809+luyuzhe111@users.noreply.github.com>
DaizeDong pushed a commit to DaizeDong/verl that referenced this pull request Apr 19, 2026
…ly async / one step off) (verl-project#5184)
